
    A comment on "The effect of a common currency on the volatility of the extensive margin of trade"

    In this paper I comment on Auray, Eyquem, and Pontineau (2012). I show that their introduction of sticky prices into the Ghironi & Melitz (2005) framework is incorrect and biases their simulation results. I also show that, when sticky prices are introduced into the Ghironi & Melitz (2005) framework correctly, the model is able to account for the empirical findings of Auray, Eyquem, and Pontineau (2012). Finally, I find that results improve quantitatively if central banks target a data-consistent CPI inflation.
    Keywords: Pricing-to-market; Local currency pricing; Extensive Margin; Monetary Union; Monetary Policy

    Distributed learning of CNNs on heterogeneous CPU/GPU architectures

    Convolutional Neural Networks (CNNs) have been shown to be powerful classification tools in tasks that range from check reading to medical diagnosis, coming close to human perception and in some cases surpassing it. However, the problems to solve are becoming larger and more complex, which translates into larger CNNs and training times so long that not even the adoption of Graphics Processing Units (GPUs) can keep up with them. This problem is partially solved by using more processing units and the distributed training methods offered by several frameworks dedicated to neural network training. However, these techniques do not take full advantage of the parallelization opportunities offered by CNNs, nor of the cooperative use of heterogeneous devices with different processing capabilities, clock speeds, memory sizes, and so on. This paper presents a new method for the parallel training of CNNs that can be considered a particular instantiation of model parallelism, where only the convolutional layer is distributed. In fact, the convolutions processed during training (forward and backward propagation included) represent 60-90% of the global processing time. The paper analyzes the influence of network size, bandwidth, batch size, number of devices (including their processing capabilities), and other parameters. Results show that this technique is capable of reducing the training time without affecting the classification performance for both CPUs and GPUs. For the CIFAR-10 dataset, using a CNN with two convolutional layers with 500 and 1500 kernels, respectively, the best speedups achieved are 3.28× using four CPUs and 2.45× using three GPUs. Modern imaging datasets, larger and more complex than CIFAR-10, will certainly require more than 60-90% of processing time for computing convolutions, and speedups will tend to increase accordingly.
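
    As an illustration only (not the authors' implementation), the sketch below shows the core idea of model parallelism restricted to the convolutional layer: the kernel set is partitioned across workers, each worker convolves the same input with its own kernel subset, and the partial feature maps are concatenated. All names and sizes are made up for the example, and the workers are simulated sequentially instead of running on separate CPUs/GPUs.

        # Minimal sketch: kernel-wise model parallelism of a convolutional layer.
        import numpy as np

        def conv2d_valid(image, kernels):
            """Naive 'valid' convolution: image (H, W), kernels (K, kh, kw) -> (K, H-kh+1, W-kw+1)."""
            K, kh, kw = kernels.shape
            H, W = image.shape
            out = np.empty((K, H - kh + 1, W - kw + 1))
            for k in range(K):
                for i in range(H - kh + 1):
                    for j in range(W - kw + 1):
                        out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
            return out

        def parallel_conv_layer(image, kernels, n_workers):
            """Split the kernels into n_workers chunks (one per device in the real setting,
            simulated sequentially here) and stitch the partial feature maps back together."""
            chunks = np.array_split(kernels, n_workers)          # kernel-wise partition
            partial = [conv2d_valid(image, chunk) for chunk in chunks]
            return np.concatenate(partial, axis=0)               # same channels as a single device

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            image = rng.standard_normal((32, 32))                # one CIFAR-10-sized channel
            kernels = rng.standard_normal((8, 5, 5))
            single = conv2d_valid(image, kernels)
            split = parallel_conv_layer(image, kernels, n_workers=4)
            print(np.allclose(single, split))                    # True: the partitioning is exact

    Because the partition is exact, any speedup comes purely from spreading the convolution work over devices; classification performance is untouched, which matches the claim in the abstract.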

    Development of an electronic scholar notebook for students with special needs

    Paper presented at DSAI - International Conference on Software Development for Enhancing Accessibility and Fighting Info-exclusion, Vila Real (Portugal), 8-9 November 2007.
    The Salamanca Declaration promotes the integration of students with special needs into regular education. To achieve this goal, it is fundamental to assist these students with different mechanisms, some of them technology based. Students with motor difficulties face obstacles of various kinds, such as the execution of tasks that require handwriting (e.g. copies, dictations and worksheet resolution). Some of these students use portable computers equipped with text processors and rate-enhancement systems that accelerate writing on the computer. However, our experience shows that these tools are not enough: the management of the produced information has proven to be a very challenging task for these children. We therefore consider it of paramount importance to develop an application that helps students manage all the information they produce. This article reports the design of a digital scholar notebook that can constitute an effective alternative to its traditional counterpart.

    A logic for n-dimensional hierarchical refinement

    Hierarchical transition systems provide a popular mathematical structure to represent state-based software applications in which different layers of abstraction are represented by inter-related state machines. The decomposition of high-level states into inner sub-states, and of their transitions into inner sub-transitions, is a common refinement procedure adopted in a number of specification formalisms. This paper introduces a hybrid modal logic for k-layered transition systems, its first-order standard translation, a notion of bisimulation, and a modal invariance result. Layered and hierarchical notions of refinement are also discussed in this setting.
    Comment: In Proceedings Refine'15, arXiv:1606.0134
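
    The paper's own translation for k-layered systems is not reproduced here; purely as a point of reference, the classical first-order standard translation for basic hybrid logic (with nominals i and the satisfaction operator @), which translations of this kind typically extend, reads as follows. The notation is the textbook one, not necessarily the paper's.

        \begin{align*}
          \mathrm{ST}_x(p)                 &= P(x)\\
          \mathrm{ST}_x(i)                 &= (x = c_i) \quad\text{(nominal $i$ read as a constant $c_i$)}\\
          \mathrm{ST}_x(\neg\varphi)       &= \neg\,\mathrm{ST}_x(\varphi)\\
          \mathrm{ST}_x(\varphi\wedge\psi) &= \mathrm{ST}_x(\varphi)\wedge\mathrm{ST}_x(\psi)\\
          \mathrm{ST}_x(\Diamond\varphi)   &= \exists y\,\bigl(R(x,y)\wedge\mathrm{ST}_y(\varphi)\bigr)\\
          \mathrm{ST}_x(@_i\,\varphi)      &= \mathrm{ST}_{c_i}(\varphi)
        \end{align*}

    Translations of this shape are what underpin modal invariance results: states related by a suitable bisimulation satisfy exactly the same modal formulas.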

    Distribution-Based Categorization of Classifier Transfer Learning

    Transfer Learning (TL) aims to transfer knowledge acquired in one problem, the source problem, to another problem, the target problem, dispensing with the bottom-up construction of the target model. Due to its relevance, TL has gained significant interest in the Machine Learning community, since it paves the way to devising intelligent learning models that can easily be tailored to many different applications. As is natural in a fast-evolving area, a wide variety of TL methods, settings and nomenclature have been proposed so far, and many works report different names for the same concepts. This mixture of concepts and terminology obscures the TL field, hindering its proper consideration. In this paper we present a review of the literature on the majority of classification TL methods, together with a distribution-based categorization of TL using a common nomenclature suitable to classification problems. Under this perspective, three main TL categories are presented, discussed and illustrated with examples.
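
    The three categories themselves are not spelled out in the abstract. Purely as an illustration of what a distribution-based reading of classifier TL usually distinguishes (an assumption, not necessarily the paper's exact taxonomy), the mismatches between a source problem S and a target problem T can be written as:

        \begin{align*}
          \text{covariate shift:}   \quad & P_S(X) \neq P_T(X), & P_S(Y \mid X) &= P_T(Y \mid X)\\
          \text{conditional shift:} \quad & P_S(X) = P_T(X),    & P_S(Y \mid X) &\neq P_T(Y \mid X)\\
          \text{joint shift:}       \quad & P_S(X, Y) \neq P_T(X, Y) & &\text{(both factors may change)}
        \end{align*}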

    NoSQL no suporte à análise de grande volume de dados (NoSQL in support of the analysis of large volumes of data)

    In today's information systems, large amounts of data are generated at every instant, which translates into large volumes of information. Interpreting this data can give companies a competitive advantage over their rivals. The Business Intelligence field provides companies with mechanisms to analyse this data. However, the software projects that implement these solutions demand great maturity in the requirements: in practice, clients need to know in advance which data is worth extracting. Changes to the intended analyses are common, yet they often require technical resources to evaluate and evolve the way the data is processed, increasing the project's duration and, consequently, its cost. This paper presents a solution that allows the end user to choose a list of data and gives them the freedom to organize the information as they wish, namely in terms of grouping and sorting, without technical intervention to adapt the solution to each user's needs.
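
    As a minimal sketch of the kind of user-driven regrouping and re-sorting the paper describes (illustrative only: the field names and data are made up, and no specific NoSQL engine is assumed), schema-free records can be reorganized on demand without touching the storage layer:

        # Group schema-free, document-style records by a user-chosen field and
        # sort the aggregated result the way the user asked for.
        from itertools import groupby
        from operator import itemgetter

        records = [
            {"region": "North", "product": "A", "sales": 120},
            {"region": "South", "product": "A", "sales": 80},
            {"region": "North", "product": "B", "sales": 200},
            {"region": "South", "product": "B", "sales": 150},
        ]

        def summarize(docs, group_field, measure, sort_desc=True):
            """Group documents by group_field, sum the chosen measure, and sort the totals."""
            docs = sorted(docs, key=itemgetter(group_field))      # groupby needs sorted input
            totals = {key: sum(d[measure] for d in grp)
                      for key, grp in groupby(docs, key=itemgetter(group_field))}
            return sorted(totals.items(), key=lambda kv: kv[1], reverse=sort_desc)

        # The same data, reorganized on demand by the end user:
        print(summarize(records, "region", "sales"))   # [('North', 320), ('South', 230)]
        print(summarize(records, "product", "sales"))  # [('B', 350), ('A', 200)]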

    The Influence of Image Normalization in Mammographic Classification with CNNs

    In order to improve the performance of Convolutional Neural Networks (CNNs) in the classification of mammographic images, many researchers choose to apply a normalization method during the pre-processing stage. In this work, we assess the impact of six different normalization methods on the classification performance of two CNNs. The results allow us to conclude that the effect of image normalization on the performance of the CNNs depends on which network is chosen to perform the lesion classification; moreover, the normalization method that seems to have the most positive impact is the one that subtracts the image mean and divides it by the corresponding standard deviation (best AUC mean of 0.786 with CNN-F and 0.790 with Caffe; best single-run AUC of 0.793 with CNN-F and 0.791 with Caffe).
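
    A minimal sketch of the normalization the abstract singles out, applied per image (subtract the image's own mean and divide by its own standard deviation); this is only the stated operation, not the authors' full pipeline, and the synthetic patch below stands in for real mammographic data.

        # Per-image z-score normalization: (image - mean) / std.
        import numpy as np

        def zscore_normalize(image, eps=1e-8):
            """Return the z-scored image; eps guards against division by zero on flat images."""
            image = image.astype(np.float64)
            return (image - image.mean()) / (image.std() + eps)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            patch = rng.integers(0, 4096, size=(64, 64))        # synthetic 12-bit intensities
            norm = zscore_normalize(patch)
            print(round(norm.mean(), 6), round(norm.std(), 6))  # ~0.0 and ~1.0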